We address the problem of designing stabilizing control policies for nonlinear systems in discrete-time, while minimizing an arbitrary cost function. When the system is linear and the cost is convex, the System Level Synthesis (SLS) approach offers an effective solution based on convex programming. Beyond this case, a globally optimal solution cannot be found in a tractable way, in general. In this paper, we develop a parametrization of all and only the control policies stabilizing a given time-varying nonlinear system in terms of the combined effect of 1) a strongly stabilizing base controller and 2) a stable SLS operator to be freely designed. Based on this result, we propose a Neural SLS (Neur-SLS) approach guaranteeing closed-loop stability during and after parameter optimization, without requiring any constraints to be satisfied. We exploit recent Deep Neural Network (DNN) models based on Recurrent Equilibrium Networks (RENs) to learn over a rich class of nonlinear stable operators, and demonstrate the effectiveness of the proposed approach in numerical examples.
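As a toy numerical illustration of this parametrization (not the paper's construction), the sketch below simulates a scalar linear system where the input is the sum of a base-controller feedback term and a stable operator acting on reconstructed disturbances. The gain `K_BASE` and the FIR coefficients are illustrative stand-ins for the strongly stabilizing base controller and the REN-based SLS operator, which in the paper are far richer objects.

```python
import numpy as np

def simulate(T=50, a=1.2, K_BASE=-1.0, fir=(0.3, 0.1), w_std=0.1, seed=0):
    """Scalar system x_{t+1} = a*x_t + u_t + w_t.

    u_t = K_BASE * x_t                 (base controller: a + K_BASE stable)
        + FIR filter of past w's       (a trivially stable 'SLS' operator).
    The disturbance w_{t-1} is reconstructed exactly from the model at
    time t, mimicking the internal-model reconstruction in SLS.
    """
    rng = np.random.default_rng(seed)
    x, xs, w_hist = 1.0, [], []
    for t in range(T):
        # stable operator applied to reconstructed past disturbances
        boost = sum(c * w_hist[-i - 1] for i, c in enumerate(fir)
                    if i < len(w_hist))
        u = K_BASE * x + boost
        w = w_std * rng.standard_normal()
        xs.append(x)
        x = a * x + u + w
        w_hist.append(w)   # w_t becomes known once x_{t+1} is measured
    return np.array(xs)

traj = simulate()
```

With the base controller active the closed loop contracts regardless of the FIR coefficients; removing it (`K_BASE=0.0`) leaves the open-loop instability, which is the point of designing over the stable-operator part only.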
Large-scale cyber-physical systems require control policies to be distributed, that is, to rely only on local real-time measurements and communication with neighboring agents. However, even in seemingly simple cases, optimal distributed control (ODC) problems are highly intractable. Recent work has therefore proposed training Neural Network (NN) distributed controllers. A main challenge of NN controllers is that they are not dependable during and after training, that is, the closed-loop system may be unstable, and training may fail due to vanishing and exploding gradients. In this paper, we address these issues for networks of nonlinear port-Hamiltonian (pH) systems, whose modeling power ranges from energy systems to non-holonomic vehicles and chemical reactions. Specifically, we embrace the compositional properties of pH systems to characterize deep Hamiltonian control policies with built-in closed-loop stability guarantees, irrespective of the interconnection topology and the chosen NN parameters. Furthermore, our setup enables leveraging recent results on well-behaved neural ODEs to prevent the phenomenon of vanishing gradients by design. Numerical experiments corroborate the dependability of the proposed architecture, while matching the performance of general NN policies.
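A minimal sketch of the port-Hamiltonian structure this abstract builds on: the dynamics `x_dot = (J - R) ∇H(x)` with `J` skew-symmetric and `R` positive semi-definite dissipate the energy `H` along trajectories. The quadratic Hamiltonian, matrices, and explicit-Euler discretization below are illustrative choices, not the paper's controller design.

```python
import numpy as np

def ph_step(x, J, R, grad_H, dt=0.01):
    """One explicit-Euler step of the port-Hamiltonian flow
    x_dot = (J - R) grad_H(x); J skew-symmetric, R >= 0."""
    return x + dt * (J - R) @ grad_H(x)

# Illustrative instance: H(x) = 0.5 ||x||^2, a rotation plus mild damping.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = 0.1 * np.eye(2)                        # dissipation
H = lambda x: 0.5 * x @ x
grad_H = lambda x: x

x = np.array([1.0, 0.0])
energies = [H(x)]
for _ in range(200):
    x = ph_step(x, J, R, grad_H)
    energies.append(H(x))
```

The skew part `J` moves energy around without creating it, while `R` only removes it, so `H` decreases monotonically along the simulated trajectory; compositional stability arguments for networks of such systems exploit exactly this structure.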
Training Deep Neural Networks (DNNs) can be difficult due to vanishing and exploding gradients during weight optimization through backpropagation. To address this problem, we propose a general class of Hamiltonian DNNs (H-DNNs) that stem from the discretization of continuous-time Hamiltonian systems and include several existing DNN architectures based on ordinary differential equations. Our main result is that a broad set of H-DNNs ensures non-vanishing gradients by design for an arbitrary network depth. This is obtained by proving that, using a semi-implicit Euler discretization scheme, the backward sensitivity matrices involved in gradient computations are symplectic. We also provide an upper bound on the magnitude of sensitivity matrices and show that exploding gradients can be controlled through regularization. Finally, we enable distributed implementations of backward and forward propagation algorithms in H-DNNs by characterizing appropriate sparsity constraints on the weight matrices. The good performance of H-DNNs is demonstrated on benchmark classification problems, including image classification with the MNIST dataset.
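The discretization argument can be sketched concretely. For a separable Hamiltonian `H(p, q) = K(p) + V(q)`, one semi-implicit Euler step updates `p` with the old `q` and then `q` with the new `p`; the resulting layer map is symplectic (for a one-dimensional `p` and `q`, its Jacobian has determinant exactly 1). The specific potential `V(q) = Σ log cosh(Wq + b)` and kinetic term `K(p) = ½‖p‖²` below are illustrative choices, not the paper's architectures.

```python
import numpy as np

def layer(p, q, W, b, h=0.1):
    """Semi-implicit Euler step for H(p,q) = 0.5||p||^2 + sum(log cosh(Wq+b)).

    grad_V(q) = W^T tanh(Wq + b),  grad_K(p) = p.
    """
    p_new = p - h * W.T @ np.tanh(W @ q + b)   # p_{j+1} = p_j - h grad_V(q_j)
    q_new = q + h * p_new                      # q_{j+1} = q_j + h grad_K(p_{j+1})
    return p_new, q_new

def forward(p, q, weights, h=0.1):
    """Stack layers: depth = len(weights), one (W, b) pair per layer."""
    for W, b in weights:
        p, q = layer(p, q, W, b, h)
    return p, q
```

Because each layer's forward map is volume-preserving, the backward sensitivity matrices cannot shrink the gradient to zero regardless of depth, which is the non-vanishing-gradient property the abstract refers to.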
Language modeling, a central task in natural language processing, involves estimating a probability distribution over strings. In most cases, the estimated distribution sums to 1 over all finite strings. However, in some pathological cases, probability mass can ``leak'' onto the set of infinite sequences. In order to characterize the notion of leakage more precisely, this paper offers a measure-theoretic treatment of language modeling. We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense. We also generalize characterizations of tightness proposed in previous works.
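Leakage is easy to exhibit numerically. The toy model below (my own illustration, not from the paper) emits either EOS or a single non-EOS token at each step; if the EOS probability decays fast enough that its sum converges, a strictly positive amount of probability mass escapes to infinite sequences, whereas a bounded-below EOS probability keeps the model tight.

```python
def finite_string_mass(eos_prob, horizon=10_000):
    """Total probability a toy LM assigns to finite strings when the EOS
    probability at step t is eos_prob(t) and the remaining mass at each
    step goes to one non-EOS token (a sufficient setup for this demo)."""
    mass, alive = 0.0, 1.0
    for t in range(horizon):
        e = eos_prob(t)
        mass += alive * e          # strings terminating exactly at step t
        alive *= (1.0 - e)         # mass still on unfinished prefixes
    return mass

tight = finite_string_mass(lambda t: 0.1)             # constant EOS prob
leaky = finite_string_mass(lambda t: 0.25 * 0.5**t)   # summable EOS prob
```

In the first case the survival probability `(1 - 0.1)^t` vanishes, so the finite-string mass tends to 1; in the second, the infinite product of survival factors converges to a positive limit, so `leaky` stays strictly below 1 and the deficit is exactly the mass that "leaks" onto infinite sequences.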
After just a few hundred training updates, a standard probabilistic model for language generation has likely not yet learnt many semantic or syntactic rules of natural language, which inherently makes it difficult to estimate the right probability distribution over next tokens. Yet around this point, these models have identified a simple, loss-minimising behaviour: to output the unigram distribution of the target training corpus. The use of such a crude heuristic raises the question: Rather than wasting precious compute resources and model capacity for learning this strategy at early training stages, can we initialise our models with this behaviour? Here, we show that we can effectively endow our model with a separate module that reflects unigram frequency statistics as prior knowledge. Standard neural language generation architectures offer a natural opportunity for implementing this idea: by initialising the bias term in a model's final linear layer with the log-unigram distribution. Experiments in neural machine translation demonstrate that this simple technique: (i) improves learning efficiency; (ii) achieves better overall performance; and (iii) appears to disentangle strong frequency effects, encouraging the model to specialise in non-frequency-related aspects of language.
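The initialization itself is a one-liner. The sketch below (a minimal numpy illustration; the helper name and smoothing constant are my own) builds the log-unigram bias from corpus counts; with zero-initialized output weights, the logits at the first update equal the bias, so the model's initial next-token distribution is exactly the smoothed unigram distribution.

```python
import numpy as np

def unigram_log_bias(token_counts, smoothing=1.0):
    """Log of the (add-smoothed) unigram distribution, used to initialise
    the bias of the model's final linear layer (illustrative helper)."""
    counts = np.asarray(token_counts, dtype=float) + smoothing
    return np.log(counts / counts.sum())

# With the output weight matrix initialised to zero, logits == bias,
# so softmax(logits) recovers the smoothed unigram distribution.
bias = unigram_log_bias([50, 30, 15, 5])
logits = bias
probs = np.exp(logits) / np.exp(logits).sum()
```

This is why the trick costs nothing at inference time: it only moves where optimization starts, not what the architecture can represent.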
Recent developments of advanced driver-assistance systems necessitate an increasing number of tests to validate new technologies. These tests cannot be carried out on track in a reasonable amount of time, and automotive groups rely on simulators to perform most tests. The reliability of these simulators for constantly refined tasks is becoming an issue and, to increase the number of tests, the industry is now developing surrogate models that should mimic the behavior of the simulator while being much faster to run on specific tasks. In this paper, we aim to construct a surrogate model to mimic and replace the simulator. We first test several classical methods such as random forests, ridge regression, and convolutional neural networks. Then we build three hybrid models that use all these methods and combine them to obtain an efficient hybrid surrogate model.
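The hybrid idea can be sketched in a few lines. Below, closed-form ridge regression is combined with a tiny k-nearest-neighbours regressor (standing in for the tree-based models; the blending weight `alpha` and all names are illustrative, and this is not the paper's actual ensemble).

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def knn_predict(X_train, y_train, X, k=3):
    """A minimal k-NN regressor standing in for the tree-based surrogates."""
    preds = []
    for x in X:
        idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

def hybrid_predict(X_train, y_train, X, lam=1.0, k=3, alpha=0.5):
    """Blend the two base surrogates with a fixed convex weight alpha."""
    w = ridge_fit(X_train, y_train, lam)
    return alpha * (X @ w) + (1 - alpha) * knn_predict(X_train, y_train, X, k)
```

A fixed convex combination is the simplest hybrid; stacking (fitting the blend weight on held-out simulator runs) is the natural next step.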
The material science literature contains up-to-date and comprehensive scientific knowledge of materials. However, its content is unstructured and diverse, resulting in a significant gap in providing sufficient information for material design and synthesis. To this end, we used natural language processing (NLP) and computer vision (CV) techniques based on convolutional neural networks (CNNs) to discover valuable experiment-based information about nanomaterials and synthesis methods in energy-material-related publications. Our first system, TextMaster, extracts opinions from texts and classifies them into challenges and opportunities, achieving 94% and 92% accuracy, respectively. Our second system, GraphMaster, realizes data extraction of tables and figures from publications with 98.3% classification accuracy and 4.3% data extraction mean square error. Our results show that these systems could assess the suitability of materials for a certain application by evaluation of synthesis insights and case analysis with detailed references. This work offers a fresh perspective on mining knowledge from scientific literature, providing a wide swath of opportunities to accelerate nanomaterial research through CNNs.
Ovarian cancer is the most lethal gynaecological malignancy. The disease is most often asymptomatic at early stages, and its diagnosis relies on expert evaluation of transvaginal ultrasound images. Ultrasound is the first-line imaging modality for characterising adnexal masses; it requires significant expertise, and its analysis is subjective and labour-intensive, therefore open to error. Hence, automating processes to facilitate and standardise the evaluation of scans is desirable in clinical practice. Using supervised learning, we demonstrate that segmentation of adnexal masses is possible; however, prevalence and label imbalance restrict the performance on under-represented classes. To mitigate this, we apply a novel pathology-specific data synthesiser. We create synthetic medical images and their corresponding ground-truth segmentations by using Poisson image editing to integrate less common masses into other samples. Our approach achieves the best performance across all classes, including an improvement of up to 8% when compared to nnU-Net baseline approaches.
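Poisson image editing pastes a region by matching the source's gradients while pinning the target's boundary values, which is what makes the seams invisible. The one-dimensional analogue below (my own illustration; real seamless cloning solves the same Poisson equation over a 2-D mask) solves `f'' = source''` inside an interval with Dirichlet boundary conditions taken from the target.

```python
import numpy as np

def poisson_blend_1d(target, source, lo, hi):
    """Blend source into target over [lo, hi): solve f'' = source'' with
    f[lo-1] = target[lo-1] and f[hi] = target[hi] (Dirichlet conditions)."""
    n = hi - lo
    # discrete second differences (Laplacian) of the source in the interior
    b = (source[lo + 1:hi + 1] - 2 * source[lo:hi]
         + source[lo - 1:hi - 1]).astype(float)
    b[0] -= target[lo - 1]      # move known boundary values to the RHS
    b[-1] -= target[hi]
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    out = target.astype(float).copy()
    out[lo:hi] = np.linalg.solve(A, b)
    return out
```

A useful sanity check: if the source differs from the target by a constant offset inside the region, the blend reproduces the target exactly, because gradients and boundary values already agree.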
The goal of this work is to help mitigate the already existing gender wage gap by supplying unbiased job recommendations based on resumes from job seekers. We employ a generative adversarial network to remove gender bias from word2vec representations of 12M job vacancy texts and 900K resumes. Our results show that representations created from recruitment texts contain algorithmic bias, and that this bias has real-world consequences for recommendation systems. Without controlling for bias, women are recommended jobs with significantly lower salary in our data. With adversarially fair representations, this wage gap disappears, meaning that our debiased job recommendations reduce wage discrimination. We conclude that adversarial debiasing of word representations can increase the real-world fairness of systems and may thus be part of the solution for creating fairness-aware recommendation systems.
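A linear special case helps build intuition for what the adversary enforces. Projecting embeddings onto the orthogonal complement of a "gender direction" makes that direction unrecoverable by any linear probe; the adversarial training in the abstract learns a comparable invariance without assuming the bias is a single known direction. The sketch below is this simpler projection baseline, not the GAN from the paper.

```python
import numpy as np

def remove_direction(E, v):
    """Project each row of E (n x d embeddings) onto the complement of
    direction v, so E_debiased @ v == 0 for every embedding."""
    v = v / np.linalg.norm(v)
    return E - np.outer(E @ v, v)
```

In the adversarial setting, a discriminator tries to predict gender from the representation and the encoder is trained to defeat it, which extends this idea to nonlinear and distributed encodings of the protected attribute.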
Clinical records frequently include assessments of patient characteristics, which may involve the completion of various questionnaires. These questionnaires provide a variety of perspectives on a patient's current state of well-being. Not only is it crucial to capture the heterogeneity given by these perspectives, but there is also a growing demand for cost-effective clinical phenotyping technologies: filling out many questionnaires can be a strain for patients, and therefore costly. In this work, we propose COBALT, a cost-based layer selector model for detecting phenotypes using a community detection approach. Our goal is to minimise the number of features used to build these phenotypes while preserving their quality. We test our model using questionnaire data from chronic tinnitus patients and represent the data in a multi-layer network structure. The model is then evaluated by predicting post-treatment data using baseline features (age, gender, and pre-treatment data) together with the identified phenotypes as features. For some post-treatment variables, predictors using phenotypes from COBALT as features outperformed those using phenotypes detected by traditional clustering methods. Moreover, using phenotype data to predict post-treatment data proved beneficial in comparison with predictors trained only on baseline features.
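The cost/quality trade-off at the heart of layer selection can be caricatured in a few lines. The greedy routine below (an illustrative sketch under strong simplifying assumptions, not the COBALT algorithm: it treats layer quality as additive and ranks layers by cost per unit of quality) picks questionnaire layers until a quality floor is met.

```python
def select_layers(layers, quality_floor):
    """Greedy cost-based layer selection (illustrative only).

    layers: iterable of (name, cost, quality) tuples; quality is naively
    assumed additive. Cheapest quality is bought first."""
    chosen, quality, cost = [], 0.0, 0.0
    for name, layer_cost, layer_quality in sorted(
            layers, key=lambda l: l[1] / l[2]):
        if quality >= quality_floor:
            break
        chosen.append(name)
        quality += layer_quality
        cost += layer_cost
    return chosen, quality, cost

layers = [("age", 0.0, 0.2), ("sex", 0.0, 0.1),
          ("tinnitus_q", 5.0, 0.5), ("depression_q", 8.0, 0.3)]
chosen, quality, cost = select_layers(layers, quality_floor=0.7)
```

Free baseline layers are always taken first, and expensive questionnaires are added only while the phenotype quality target is unmet; the actual model replaces the additive quality proxy with community-detection structure over the multi-layer network.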